Method and apparatus for determining colorimetry data of a color sample from an image of the same
Patent abstract:
METHOD, COMPUTER PROGRAM OR A SET OF COMPUTER PROGRAMS AND APPARATUS. Embodiments of the invention relate to determining the color of a color sample from an image of the color sample. In one embodiment, a color sample capture card is provided which has color samples of known color (e.g., known XYZ tristimulus values) printed on it. An image of the test color sample is then captured using available home equipment, such as a digital camera or a camera-equipped mobile phone, the image also containing the color sample capture card. Regression analysis is then performed using the RGB values of the color samples in the image and their known XYZ colors to characterize the color capture response of the image capture device. In one embodiment, the color calibration characteristics are determined using N known calibration color samples, where N is less than the total number of known calibration color samples across the entire color space. After the (...)
Publication number: BR112012017253B1
Application number: R112012017253-0
Filing date: 2011-01-17
Publication date: 2022-02-15
Inventors: Benjamin Buchanan Lings; Paul James Harrop; Peter Mark Spiers; Stewart Longhurst
Applicant: Akzo Nobel Coatings International B.V.
IPC main class:
Patent description:
TECHNICAL FIELD [001] Embodiments of the invention relate to a method and a system for determining the color of a color sample from an image of the color sample. BACKGROUND TO EXAMPLES OF THE INVENTION [002] When selecting a paint color to decorate a room, it is quite common for the customer to want to match the paint color to the color of a particular item to be placed in the room, such as an item of furniture or soft furnishings such as pillows, sofas, curtains or the like. Paint manufacturers typically offer large color palettes, and detailed color displays are provided by paint retailers to allow customers to select a color. Color swatch cards are available for the user to take home and check against the item whose color is to be matched. Conventionally, however, this requires the customer to visit a paint store, collect color cards, take the color cards home, and then attempt to match the color swatches on the cards to the color of the item to be matched. The customer must then typically return to the store, buy test pots of paint, return home, use the pots of paint, and only then finally make a purchase decision. Furthermore, such conventional techniques rely on an individual customer's perception of which paint color is the best match. It is well known, however, that color perception varies significantly from person to person, such that a color judged a match for a sample by one person will not appear to be a match to another person. [003] A potential solution to this problem is to attempt electronic color matching using a digital image. In this regard, home users today typically have several digital image capture devices at their disposal, in the form of digital cameras or camera-equipped mobile phones.
[004] However, the color capture characteristics of typical home image capture devices, such as digital cameras, mobile phones or the like, vary significantly from one device to another, and thus accurate capture of color is typically not possible. Dedicated spectrophotometer devices are available that can measure color accurately, but these are generally too expensive for most home consumers. Typical home image capture devices capture an image and represent the colors using RGB pixel values. Typically, 16-bit or 24-bit RGB values are used. Where 16-bit values are used, each of the red and blue channels typically has five bits associated with it, whereas the green channel has six bits. In this respect, the human eye is more sensitive to green than to red and blue, and thus a greater number of green shades is detectable. Where 24-bit colors are used, this equates to eight bits, or 256 levels, per color channel. [005] However, because of the differences noted above between image capture devices in capturing color accurately, and also between image reproduction devices, such as monitors and the like, in reproducing color, RGB color values are not regarded as standard values. Instead, there are fixed color-defining standards established by the Commission Internationale de l'Éclairage (CIE), such as the CIE X, Y, Z tristimulus values or the so-called CIELAB values (L*, a*, b*). CIELAB values are related to XYZ tristimulus values by a known mathematical formula. The XYZ tristimulus values themselves relate to the wavelengths present in a particular color. PRIOR ART [006] The problem of calibrating an image capture device by relating the RGB values thus captured to standard values, such as XYZ tristimulus values or CIELAB values, has previously been addressed in US 5,150,199 and WO 01/25737. [007] More particularly, US 5,150,199 (Megatronics, Inc.)
describes a method for converting or correlating numerical RGB values developed by different instruments into standard tristimulus values. In this regard, iterative regression analysis is used to determine initial functions that convert the RGB values generated by a first color video camera into standard XYZ tristimulus values. Regression analysis is then used to determine additional functions that convert the RGB values generated by the video camera, for additional colors beyond the initial ones, into the standard XYZ values. The functions generated for the video camera are then used to convert the RGB values generated by the video camera from an image of a colored object into the standard XYZ values. [008] More particularly, in US 5,150,199, RGB values and XYZ values are determined for a set of color samples. The RGB values are determined using a conventional video camera and digitizing equipment, which can detect and record numerical values for the RGB components of each color. The XYZ values of the color samples are determined using a colorimeter or a conventional spectrophotometer. [009] Once these data are captured, as a first step of the iterative regression analysis, regression is performed to determine X as a function of R, Y as a function of G, and Z as a function of B. This regression analysis uses so-called "grayscale" values in the color samples, for which the R, G and B values are approximately equal. The resulting functions are power functions. Then, in a second step, multivariate analysis of the power functions is performed, determining functions that relate each of X, Y and Z individually to all of R, G and B. In US 5,150,199, an additional technique that adapts the Y function as a function of red chroma is also described, although it is not pertinent to the present invention.
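The grayscale power-function step described above can be illustrated as a linear least-squares fit in log-log space. The sketch below is not the patent's implementation; the function name and the synthetic data are purely illustrative:

```python
import math

# Fit X = a * R^b on grayscale samples by linear least squares in log-log
# space: log X = log a + b * log R. (Illustrative sketch only.)
def fit_power(r_values, x_values):
    lr = [math.log(r) for r in r_values]
    lx = [math.log(x) for x in x_values]
    n = len(lr)
    mr, mx = sum(lr) / n, sum(lx) / n
    b = (sum((r - mr) * (x - mx) for r, x in zip(lr, lx))
         / sum((r - mr) ** 2 for r in lr))
    log_a = mx - b * mr
    return math.exp(log_a), b

# Synthetic grayscale readings following X = 0.5 * R^1.8 exactly (made-up data)
r = [10.0, 50.0, 100.0, 200.0]
a, b = fit_power(r, [0.5 * v ** 1.8 for v in r])
```

The same fit would be repeated for Y against G and Z against B, before the multivariate step relates each of X, Y and Z to all of R, G and B.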
[0010] Thus, US 5,150,199 describes a basic technique for characterizing the color capture transfer function of an image capture device, in order to allow the RGB values captured by the device to be translated into XYZ tristimulus values. However, as noted, in order to use the teaching of US 5,150,199 to characterize a captured image, the user must have access to a colorimeter or spectrophotometer to measure the colors of the color samples imaged by the image capture device being characterized. Typically, in the usage scenario outlined in the Background section above, a user will not have access to such specialist equipment, such as a colorimeter or spectrophotometer. Thus, the method of US 5,150,199 remains largely experimental. [0011] WO 01/25737, however, addresses some of these drawbacks of US 5,150,199. WO 01/25737 also describes matching captured RGB values to standard colorimetric data, and particularly matching them to CIELAB values. The mathematical analysis described in WO 01/25737 is substantially the same as that described in US 5,150,199, although WO 01/25737 introduces the concept of a color calibration standard whose colorimetric data is known. The unknown color to be measured is then imaged at the same time as the calibration standard. The calibration standard contains, in one example, 65 known colors and, in another example, 37 known colors, distributed throughout the color space. By capturing the RGB values of the calibration colors, it is possible to calculate the mathematical model needed to convert the measured signals of known colors into colorimetric data (e.g., CIELAB values). Once this model is obtained, the colors (in the CIELAB color space) of any unknown colors in the image can then be determined from their RGB values.
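For reference, the "known mathematical formula" relating CIELAB to XYZ mentioned in paragraph [005] is the standard CIE conversion. A minimal sketch follows; the D65/2° reference white used here is an assumption, since the text does not name an illuminant:

```python
# Standard CIE XYZ -> CIELAB conversion (sketch). The reference white is
# assumed to be D65/2-degree; the patent text does not specify one.
def xyz_to_lab(X, Y, Z, white=(95.047, 100.0, 108.883)):
    def f(t):
        # Cube root above the CIE cutoff, linear segment below it
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29

    fx, fy, fz = (f(v / w) for v, w in zip((X, Y, Z), white))
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)
```

By definition, the reference white itself maps to L* = 100 with a* = b* = 0.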
[0012] WO 01/25737 describes that the image of the color sample to be determined is captured at the same time as the calibration standard using, for example, a flatbed scanner or a digital camera. The captured image is then processed to determine the unknown colors in the image. The arrangement is described as being particularly useful in the car repair industry. In this regard, the color of a car to be repaired is measured using an electronic imaging device. Prior to or concurrently with this, an image is recorded of a panel on which different calibration colors have been applied. The colorimetric data of the car's color is then calculated, and a color formula that provides a color identical to the color of the car to be repaired is found. The color formula is then prepared and applied. [0013] WO 01/25737 therefore describes an arrangement for use in professional situations such as car repair shops or paint shops. Thus, WO 01/25737 does not address all the related problems, such as where the lighting varies across the captured image, where the image is not in the correct orientation, or where the color sample actually contains different colors spatially mixed across the sample. In a domestic situation, on the other hand, all of these anomalous situations can occur. [0014] Other prior art includes WO 02/13136, WO 2008/108763 and WO 2004/028144. BRIEF DESCRIPTION OF EXAMPLES OF THE INVENTION [0015] Embodiments of the invention address some of the problems noted above and relate to determining the color of a color sample from an image of the color sample, wherein the image is typically (although not exclusively) captured by a non-expert user using non-specialist equipment. In one embodiment, a color sample capture card is provided which has color samples of known color (e.g., known XYZ tristimulus values) printed on it.
An image of the test color sample is then captured using available home equipment, such as a digital camera or camera-equipped mobile phone, the image also containing the color sample capture card. In one embodiment, the image is then transmitted to a remote color determination service for determination of the color of the color sample. Regression analysis is performed using the RGB values of the color samples in the image and their known XYZ colors to characterize the color capture response of the image capture device. Having characterized the image capture device, the XYZ color of the unknown color sample can be determined from its RGB color in the image. Once the XYZ color is known, the color can then be accurately matched to a paint color palette to determine a paint color that matches the unknown color. In addition, complementary colors in the paint palette can be identified. [0016] In performing the above, in one embodiment, differences in spatial brightness across the image can be accounted for. In another embodiment, card placement errors in the image are also corrected prior to processing, using image deskew and rotation transformations. In a further embodiment, the XYZ color is calculated in two passes, with the information from the first pass informing the second pass. In yet a further embodiment, in which the color sample actually contains more than one color, the individual colors are determined using clustering techniques to identify the dominant colors in the sample. [0017] In view of the above, a first aspect of the invention provides a method, which comprises: receiving first image data relating to an unknown color sample, for which colorimetry data is to be determined; and receiving second image data relating to a plurality of known calibration color samples, for which the colorimetry data is already known.
A plurality of color calibration characteristics relating the color measurements of the known calibration color samples in the second image data to the corresponding known colorimetry data of the calibration color samples is then determined; and the colorimetry data of the unknown color sample is calculated from its color measurements in the first image data and the determined color calibration characteristics. In particular, the color calibration characteristics are determined using N known calibration color samples, where N is less than the total number of known color samples across the entire color space. In some circumstances, this may provide more accurate results. [0018] More preferably, in the above embodiment, the N known calibration color samples are those N samples that are closest in the color space to an estimated color of the unknown color sample. This effectively allows the color space to be "zoomed in on" when determining the color calibration characteristics, so that the part of the color space that contains the unknown color sample is characterized more accurately. [0019] Within the above embodiment, the estimated color can be obtained by determining a first set of calibration characteristics using all of the available known calibration color samples and calculating the estimated color using that first set of calibration characteristics. A "second pass" of processing is then performed, using the N known calibration color samples closest to the estimated color. In this way, a two-pass processing approach is used, which allows the overall color space to be characterized and then the part of the space containing the unknown color sample to be characterized in more detail, to provide more accurate results. [0020] Alternatively, the N known calibration color samples are those N samples within a confined color space that is known to be represented by the second image data.
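The "second pass" sample selection of paragraphs [0018] and [0019] can be sketched as follows; the data layout ((RGB, XYZ) pairs) and the use of squared Euclidean distance in XYZ space are illustrative assumptions:

```python
# Keep only the N calibration samples closest in XYZ space to the color
# estimated by the first regression pass. (Illustrative sketch only.)
def nearest_samples(samples, estimate, n):
    """samples: list of (rgb, xyz) pairs; estimate: first-pass XYZ estimate."""
    def dist2(xyz):
        return sum((c - e) ** 2 for c, e in zip(xyz, estimate))
    return sorted(samples, key=lambda s: dist2(s[1]))[:n]
```

The regression of the first pass would then simply be repeated on the reduced sample set (e.g., N at or around 50, per paragraph [0022]).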
In this regard, it may be that the known calibration color samples are known to be within a confined part of the color space; for example, they may all be red or blue. That is, when trying to match a red color, the user uses known calibration color samples that are predominantly red or nearly red, thereby restricting the part of the capture device's color space that needs to be characterized. [0021] In a further alternative, the N known calibration color samples are those N samples whose color values measured from the second image data are most similar to the color value of the unknown sample measured from the first image data. For example, the N known calibration color samples that have the closest RGB values to the unknown color sample can be used. [0022] Within the above embodiments, N preferably lies in a range from substantially 5 to substantially 250 or, more preferably, from substantially 10 to substantially 100 or, even more preferably, from substantially 20 to substantially 85 or, even more preferably, from substantially 30 to substantially 70 or, even more preferably, from substantially 40 to substantially 60 or, most preferably, at or around 50. In other embodiments, different values or ranges of N may be used. [0023] In one embodiment, the first image data and the second image data are received from a remote user over a telecommunications network. In addition, information relating to a matching paint color can be provided to the user over the telecommunications network. In this way, paint colors matching an unknown color sample can be provided as a remote service. [0024] In one embodiment, the first image data and the second image data are received as any one of: i) an e-mail message; ii) an MMS message; and/or iii) image data on a web page.
In addition, the information relating to the matching paint color can also be provided as any one of: i) an e-mail message; ii) an MMS message; iii) an SMS message; and/or iv) data on a web page. Such communications protocols facilitate the provision of a remote paint matching service that is familiar to users and easy to use. [0025] In one embodiment, the first image data and the second image data are produced by the user using an image capture device, the image capture device preferably being any one of: i) a digital camera; ii) a camera-equipped mobile phone; and/or iii) a digital video camera. Again, such equipment is readily available to a typical user, and the user is familiar with its operation. [0026] In one embodiment, the determined colorimetry data and/or the known colorimetry data are XYZ tristimulus values. XYZ tristimulus values define fixed and specific standard colors. [0027] In one embodiment, colors complementary to the matching color can be determined, and information relating to the determined complementary colors is provided to the user. In this way, complementary color schemes can be provided to the user easily. [0028] In one embodiment, at least the second image data is oriented into a known orientation to allow recognition of the known calibration color samples. Automatic orientation of the image data provides ease of use for the end user, as the second image data need not be captured in any specific required orientation. [0029] In this embodiment, the orientation preferably comprises performing edge detection to identify the position of the set of known calibration color samples in the second image data. Furthermore, the orientation may further comprise identifying a plurality of predetermined points relating to the set of known calibration color samples in the second image data.
Once these known points are identified, a perspective transformation can be applied to the second image data, depending on the positions of the identified points, to deskew the image of the set of known calibration color samples. [0030] Furthermore, in this embodiment, the orientation may further comprise identifying predetermined rotational orientation markings relating to the set of known calibration color samples in the second image data. The second image data can then be rotated depending on the location of the identified rotational orientation marks, such that the known calibration color samples are placed at known positions in the second image data. [0031] In one embodiment, differences in brightness across the known calibration color samples can also be compensated for. This allows image data to be captured under uncontrolled lighting conditions, where there may be uneven lighting across the image. Again, this provides ease of use for the end user. [0032] Within this embodiment, the compensation may comprise determining a first set of one or more functions having a first set of calibration coefficients, wherein the one or more functions relate the measured colors of the known calibration color samples in the second image data to the known colorimetry data of the calibration color samples and to the known position of each known sample in the image. The determined functions are then analyzed to find a second set of functions having a second set of calibration coefficients. The first and second sets of calibration functions and coefficients are then used to calculate the colorimetry data of the unknown color sample.
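The perspective (deskew) transformation of paragraphs [0029] and [0030] can be derived from four matched corner points as a 3x3 homography. The library-free sketch below shows the standard construction; it is an illustration, not the patent's code:

```python
# Compute the 3x3 perspective matrix mapping four detected card corners
# (src) onto the corners of an upright rectangle (dst). Solves the usual
# 8x8 linear system for the 8 unknown matrix entries (h33 fixed at 1).
def homography(src, dst):
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve(A, b)
    return [h[0:3], h[3:6], [h[6], h[7], 1.0]]

def solve(A, b):
    # Plain Gaussian elimination with partial pivoting
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x
```

Each pixel position (x, y) is then mapped through the matrix and divided by the third (homogeneous) coordinate to obtain its deskewed position.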
[0033] Furthermore, the brightness compensation more preferably also comprises, prior to determining the first set of functions, determining a precursor set of functions having a precursor set of calibration coefficients that relate the measured colors of the known calibration color samples in the second image data to the known colorimetry data of the calibration color samples, without regard to the positions of the known color samples. The precursor set of calibration coefficients is then used as part of the first set of calibration coefficients in determining the first set of one or more functions. [0034] In one embodiment of the invention, a clustering algorithm can be applied to the values of the pixels representing the unknown color sample in the first image data to determine the number of colors in the sample image, and a color is identified for each identified cluster. With such an arrangement, if the unknown color sample contains more than one color, then the dominant color can be identified and/or the individual colors can be identified separately. [0035] Within this embodiment, the pixel values are first calibrated using the color calibration characteristics. This has the effect of ensuring that the clustering algorithm operates on the actual colors in the color sample. [0036] The clustering algorithm used may then operate by: i) calculating the mean value of the pixels in a cluster; ii) determining the number of pixels within a predetermined threshold distance of the mean value; and then iii) increasing the number of clusters if the determined number of pixels is less than a predetermined fraction of the number of pixels in the first image data relating to the unknown sample. In this way, it becomes possible to identify different colors in the sample, with each identified cluster relating to a corresponding individual color.
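Steps i) to iii) of paragraph [0036] can be sketched with a simple k-means pass and an increasing cluster count. The seeding, iteration count, radius and fraction below are illustrative assumptions, not values from the text:

```python
def dist(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def mean(points):
    return tuple(sum(c) / len(points) for c in zip(*points))

def kmeans(pixels, k, iters=20):
    # Deterministic seeding and fixed iteration count, for the sketch only
    centers = pixels[:k]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in pixels:
            groups[min(range(k), key=lambda i: dist(p, centers[i]))].append(p)
        centers = [mean(g) if g else centers[i] for i, g in enumerate(groups)]
    return list(zip(centers, groups))

def dominant_colors(pixels, radius=10.0, fraction=0.9, max_k=8):
    """Increase the cluster count until the pixels lying within `radius` of
    their cluster mean make up at least `fraction` of all pixels."""
    for k in range(1, max_k + 1):
        clusters = kmeans(pixels, k)
        near = sum(1 for m, members in clusters for p in members
                   if dist(p, m) <= radius)
        if near >= fraction * len(pixels):
            break
    return [m for m, _ in clusters]
```

Each returned cluster mean then corresponds to one individual color in the sample, as described above.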
[0037] In order to ensure that dominant or important colors in the sample are detected, the embodiment may also filter the clusters so as to remove from consideration those clusters that do not contain a threshold number of pixels within a second threshold distance of the cluster mean. In this way, clusters of colors with only a small number of pixels are not identified as dominant or important colors in the sample. [0038] From a second aspect, the present invention further provides an apparatus, which comprises: at least one processor; and at least one memory that includes computer program code, wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus to perform at least the following: i) receive first image data relating to an unknown color sample, for which colorimetry data is to be determined; ii) determine a plurality of color calibration characteristics relating the color measurements of the known calibration color samples in second image data to the corresponding known colorimetry data of the calibration color samples; and iii) calculate the colorimetry data of the unknown color sample from its color measurements in the first image data and the determined color calibration characteristics; characterized in that the color calibration characteristics are determined using N known calibration color samples, where N is less than the total number of known calibration color samples across the entire color space. [0039] Additional aspects and features of the present invention will become apparent from the appended claims.
BRIEF DESCRIPTION OF THE DRAWINGS [0040] Additional features and advantages of the examples of the invention will become apparent from the following description of specific embodiments of the invention, given by way of example only and with reference to the accompanying drawings, in which like numerical references refer to like parts, and in which: [0041] Figure 1 is a block diagram of a system according to an embodiment of the invention; [0042] Figure 2 is a drawing of a color calibration sample card used in an embodiment of the invention; [0043] Figure 3 is a flowchart of a process performed in an embodiment of the invention; [0044] Figure 4 is a flowchart and associated drawings illustrating an image orientation process used in an embodiment of the invention; [0045] Figure 5 is a flowchart depicting a color calibration process used in an embodiment of the invention; Figure 6 is a flowchart illustrating a multi-pass process used in an embodiment of the invention; Figure 7 is a flowchart illustrating part of a spatial brightness calibration process used in an embodiment of the invention; [0046] Figure 8 is a flowchart illustrating a clustering process used in an embodiment of the invention; [0047] Figure 9 is a diagram demonstrating the use of the clustering process used in an embodiment of the invention; [0048] Figure 10 is another diagram illustrating the use of the clustering process used in an embodiment of the invention; [0049] Figure 11 is a photograph of an experimental version of the color calibration sample card used for testing an embodiment of the invention; [0050] Figure 12 is a graph showing a grayscale power-function fit obtained from a calibration process during a test of an embodiment of the invention; [0051] Figures 13 to 15 are graphs of the power function regression fits for X, Y and Z based on the power functions shown in Figure 12; [0052] Figure 16 is a graph of a grayscale fit that uses a second-order polynomial; [0053] Figure 17 is a
graph of a grayscale fit that uses a fourth-order polynomial constrained to a zero intercept; and [0054] Figures 18 to 20 are graphs of test results obtained from an embodiment in which a second processing pass is performed. DESCRIPTION OF SPECIFIC EMBODIMENTS [0055] Various examples of the invention will now be described with respect to the accompanying figures. 1. FIRST EMBODIMENT - REGRESSION ANALYSIS USING A REDUCED COLOR SPACE [0056] Figure 1 is a block diagram of a system according to a first embodiment of the present invention. The system has user-side elements and end-server-side elements. The user-side elements are used to capture an image of the color sample to be determined, together with an image of the calibration color samples whose colorimetric data is known. The server-side or end-server elements are processing elements that receive the image data, process it, determine the color of the unknown color sample, match the color to a paint palette, and then return the matching color from the palette to the user. [0057] In this regard, the objective of the first embodiment of the present invention is to provide a system that allows a home customer or other user to accurately identify the color of an unknown color sample. In order to do this, the user obtains a calibration color swatch card, for example by mail or by visiting a paint retail store where paints are available. The calibration color swatch card has a cut-out portion over which an object whose color is to be determined can be placed. The user then captures an image of the calibration color sample card, with the object whose color is to be determined in the cut-out portion, using a readily available image capture device, such as a digital camera or camera-equipped mobile phone.
The image is then transmitted by the user, for example by e-mail, multimedia messaging service (MMS) or using a web interface, to the end server, where it is processed, the color of the unknown color sample is determined, and information regarding a matching paint color is returned to the user. In addition, information about complementary paint colors to make up a paint color scheme can also be returned to the user. [0058] Figure 1 illustrates the elements of such a system in more detail. Starting at the user's end, the user obtains the calibration color swatch card 24, for example, from a local paint retailer or by mail. The calibration color sample card 24 has on it a number of individual color samples 242, distributed spatially across the card, the colors of the color samples 242 also being distributed across the color space. The calibration color sample card 24 has a cut-out portion 244, shown in Figure 1 located in the middle but which, in other embodiments, can be anywhere on the card, in which, in use, an object to be tested is placed, or the card is placed over the object to be tested, so that part of the object to be tested shows through the cut-out portion 244. Additional details of the calibration color swatch card 24 will be described later with respect to Figure 2. [0059] In use, as noted, the user places the calibration color swatch card 24 over the object whose color is to be determined. The user then uses an image capture device, such as a digital camera or a camera-equipped mobile phone, to obtain an image of the calibration color sample card 24 with the unknown color sample to be determined also located in the image. As shown in Figure 1, a user image capture device 12, such as a digital camera, can be used, or a user mobile device 14 equipped with an image capture device such as a built-in camera. [0060] Once the user has captured the image, the user must then transmit it to the end server 10 for image processing.
Several different transmission technologies may be used to transmit the image data to the end server 10, and embodiments of the invention are not limited to those described. For example, the user can upload the image captured by the digital camera 12 to his computer 16, the computer 16 being connected to the Internet 22 through a local network, such as via a Wi-Fi router 18. The user can then use the computer 16 to e-mail the image as an attachment to an e-mail address relating to the end server 10. [0061] Alternatively, the end server 10, through a network interface, can provide a dedicated web page that can be downloaded by the computer 16 and displayed by a browser program, onto which the image data can be placed so as to be returned to the end server 10. [0062] An alternative route to the end server is provided where the user uses a mobile phone to capture the image. Some mobile devices, often known as smartphones, have Wi-Fi functionality and can be used to send e-mails or access web pages in the same way as a laptop or desktop computer. In this case, the user's mobile device is used as a portable computer, and the captured image can thereby be sent via e-mail, or as data embedded in a web page, to the end server. Alternatively, the user's mobile device may use its cellular radio interface to send the image data to the end server 10. In this case, the image data may be sent, for example, as a multimedia messaging service (MMS) message via the cellular network 26 to a mobile gateway 20, which then transmits the image data to the end server 10. In this regard, a dedicated contact number can be provided and communicated to the user (printed, for example, on the calibration color swatch card 24), to which MMS messages can be sent.
[0063] The end server 10 comprises a network interface 102 connected to the network 22 to receive image data from users and to transmit the matching color data to them, as will be described. The end server 10 further comprises a processor 104, which executes programs to perform the color determination and generally to control the operation of the end server 10. Working memory 106 is provided for use by the processor, in which data may be temporarily stored. [0064] Also provided in the end server 10 is computer-readable medium 108, which forms long-term storage on which data and programs can be stored. For example, the computer-readable medium 108 may be a hard disk drive or may, for example, be solid-state storage. A series of control programs are stored on the computer-readable medium 108. In this first embodiment, a color matching control module 114 is provided which controls the overall operation of the system and calls on other modules to perform operations as and when required. In the first embodiment, a calibration module 118 is additionally provided, which receives control commands from the color matching control module 114 as appropriate, and is executed by the processor 104 to perform a calibration function, and in particular to perform the regression analyses necessary to characterize the color capture characteristics of the image capture device used by the user. Additional details of the operation of the calibration module 118 will be provided later. [0065] In other embodiments, additional modules may be provided, such as the image orientation module 116 or the clustering module 120. The operation of these additional modules will be described later with respect to the relevant embodiments. [0066] Furthermore, an additional computer-readable storage medium 110 is provided in the end server 10, which may also take the form of a hard disk, solid-state storage or the like.
In this regard, the second computer readable storage media 110 may in fact be the same media as the media 108 and may be, for example, a partition of the same hard disk that constitutes the first computer readable storage media 108. The second computer readable storage media 110, however, stores a color database comprising colorimetry data relating to the color samples on the calibration color sample card 24. Various sets of such data may be stored, relating to the different calibration color swatch cards 24 that may be available. For each calibration color swatch card 24, the card ID is stored and then, for each known color swatch on the card, the known XYZ tristimulus values are stored, along with the x, y location coordinates of the color swatch that has those tristimulus values on the card. There will therefore be as many sets of coordinate values and associated tristimulus values as there are color swatch patches on the calibration color swatch card 24. [0067] Figure 2 illustrates the calibration color swatch card 24 in more detail. In particular, the calibration color swatch card 24 has a border 248 at its outer edge and then has color swatch patches of known color printed thereon. The color swatch patches are arranged in such a way that the patches 250 around the outer edge of the color swatch patch region are grayscale patches, that is, they range from black through various shades of gray to white. These should be captured by an image capture device, such as a digital camera, with substantially equal RGB values. They are useful in performing spatial brightness correction, as will be described in a later embodiment. [0068] The color swatch patches 242, located away from the edges of the calibration color swatch card 24, are the color patches, each of which has a known tristimulus color value. In this regard, the color patches should be printed as accurately as possible to the desired tristimulus values.
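The per-card records described above (card ID, and for each swatch its card coordinates plus known XYZ values) might be organized as in the following sketch. This is illustrative only; the class, field and card names are invented for the example and are not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class SwatchRecord:
    """One known swatch: its (x, y) position on the card and its known
    XYZ tristimulus values, as stored in the color database 110."""
    x: float
    y: float
    X: float
    Y: float
    Z: float

# Hypothetical database: card ID (read from the identification mark) -> swatches.
color_card_db = {
    "CARD-001": [
        SwatchRecord(x=10.0, y=5.0, X=41.2, Y=21.3, Z=1.9),
        SwatchRecord(x=20.0, y=5.0, X=35.8, Y=70.8, Z=11.4),
    ],
}

def swatch_at(card_id, x, y):
    """Return the known swatch nearest to position (x, y) on the given card."""
    return min(color_card_db[card_id],
               key=lambda p: (p.x - x) ** 2 + (p.y - y) ** 2)
```

A lookup such as `swatch_at("CARD-001", 10.1, 5.0)` would then pair a measured image position with the known tristimulus values of the swatch printed there.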
Alternatively, calibration color cards can be printed and each patch then measured to determine its XYZ values, using, for example, a spectrophotometer. The colors of the color swatch patches 242 are preferably distributed throughout the entire color space. However, in other embodiments to be described later, the colors can be concentrated within a particular area of the color space. [0069] The card 24 is also provided with some sort of identification mark 246, which may be a bar code or some other fidelity mark, such as a printed name, symbol or the like. This is used to identify which card is being used by the user, such that the correct color card data can be selected for use. [0070] Finally, the calibration card 24 has a cut-out portion 244, shown here in the middle. However, the position of the cut-out portion is not important, and it can be located anywhere on the card, even at the edges. Furthermore, it is not essential that a cut-out portion be included; in this regard, the calibration color sample card 24 may simply be placed next to the object or color sample which is to be determined, although this is less preferable. [0071] In use, as noted, the user obtains the calibration color swatch card 24, for example, from a paint retailer, and then places the calibration card 24 in such a way that the cut-out portion is located over a color to be tested, for example the color of a pillow, a curtain, an item of furniture or the like. In this regard, the card 24 should be placed on top of or against the object whose color is to be tested, in such a way that the color shows through the cut-out portion 244. Using a mobile phone, digital camera or the like, the user then captures a still image of the object to be tested with the color capture card in the image and sends it to the end server 10, using the various communication routes described previously, such as MMS, e-mail or web access.
[0072] Figure 3 shows the process running on the end server 10 in more detail. [0073] First, the image data 32 sent by the user are received at the network interface 102 of the end server 10. The end server 10 is controlled by the color matching control module 114, which runs on the processor 104. Once the image data are received, the color matching control module 114 first optionally performs image processing to locate and orient the calibration color sample card 24 within the image 32. This is performed in block 3.2 and is optional, since it may happen that, depending on the instructions given to the user, this step is not required. For example, the calibration color sample card 24 may come with instructions for the user to capture the image in such a way that the position of the card within the image is not skewed. In addition, the user may be instructed to crop the image in such a way that the image is solely of the calibration card 24 in a known rotational orientation, before it is sent to the end server 10. If such instructions are provided to the user and the user follows them, there will be no need to perform any routines for locating or orienting the card. In this case, therefore, the received image 32 will be an image solely of the calibration card with the unknown sample in a known orientation, i.e., it will be a card image 34 of the card and the sample. [0074] Once an image of the card 34 has been obtained, the color matching control module 114 controls the processor 104 to launch the calibration module 118 in order to perform regression analysis to characterize the color capture characteristics of the user's image capture device. The regression analysis used in the present embodiment is substantially the same as that described previously in 5,150,199 and WO 01/25737, and is shown in more detail in Figure 5. With respect to Figure 3, the regression analysis to characterize the device is performed
in block 3.4, with reference to the arrangement of the calibration card 35, known from the color card data 112 stored in the color database 110. [0075] The iterative regression algorithm involves two individual processing steps, as follows. [0076] Step 1: Determine three relationships between each of the measured R, G and B components and the known X, Y and Z components, using the grayscale color swatches on the calibration color swatch card 24, i.e.:
• X as a function of R (called function R1).
• Y as a function of G (called function G1).
• Z as a function of B (called function B1).
[0077] A power curve fit can be used on the grayscale data to obtain the R1, G1, B1 relationships in Step 1 above. Second-, fourth- or higher-order polynomial curve fits can also be used. [0078] Step 2: Determine the linear, multivariate relationships between each of the known X, Y and Z components and the three functions determined in Step 1 above, that is:
• X as a function of R1, G1, B1 (called function X1).
• Y as a function of R1, G1, B1 (called function Y1).
• Z as a function of R1, G1, B1 (called function Z1).
[0079] Step 2 of the algorithm performs multivariate regression of X, Y and Z against the power curve fits R1, G1 and B1 obtained in Step 1, that is:
X = f(R1, G1, B1)
Y = f(R1, G1, B1)
Z = f(R1, G1, B1)
or
X = a + b.R1 + c.G1 + d.B1
Y = a + b.R1 + c.G1 + d.B1
Z = a + b.R1 + c.G1 + d.B1
[0080] where a, b, c and d are constant coefficients (taking different values for each of the three fits). The three multivariate regression fits of X, Y and Z are denoted X1, Y1 and Z1, respectively. [0081] Figure 5 shows the above in more detail. In particular, the process of Figure 5 is executed as block 3.4 of Figure 3. [0082] First, in block 5.2, as discussed, image data of a color card of known orientation are received. It is then necessary to identify the color card used, in block 5.4, and this is performed using the identification mark 246 located on the calibration card 24.
That is, recognition of the identification mark 246 is performed, and this mark is then used as an index to select the appropriate set of color card data from the color card database 110. [0083] Next, the first step of the aforementioned algorithm is started, extending from blocks 5.6 to 5.14. That is, in block 5.6, a processing loop is started to read data from the image at known positions. For each grayscale sample at a known position (x, y) on the calibration card 24, its RGB values are measured from the image in block 5.8, and the XYZ tristimulus values for the sample at the same position (x, y) are then retrieved from the database in block 5.10. This process is repeated for all the grayscale swatches in the image which, on the calibration card 24, lie around the outer edge of the color swatch region, such as swatches 250. In alternative embodiments, this step need not be limited to the grayscale swatches, and the other color swatches could additionally or alternatively be used. [0084] At the end of the processing constituting blocks 5.6 to 5.12, therefore, for each known color or grayscale sample in the image, the XYZ tristimulus values will have been obtained from the appropriate color card data in the database 110, and the RGB values of that color swatch in the image will have been measured. The corresponding RGB and XYZ values are stored in association with each other in memory 106. For example, it is possible to plot the measured RGB values for each known sample against the known XYZ values of that sample, as shown in Figures 12, 16 and 17. [0085] Once the RGB values have been measured and the corresponding XYZ values retrieved from the color database, in block 5.14, Step 1 of the algorithm referred to above is performed, to determine the values of X as a function of the measured values of R, the values of Y as a function of the measured values of G, and the values of Z as a function of the measured values of B.
This step is performed using a power fit or a polynomial fit, to obtain a function that relates X to R, Y to G, and Z to B. Typically, a power fit will provide equations of the form:
X = αX.R^βX
Y = αY.G^βY
Z = αZ.B^βZ
[0086] where the coefficients αX,Y,Z and βX,Y,Z characterize the relationships. [0087] Figures 12, 16 and 17 illustrate examples of curve fits obtained from experimental tests performed on images captured of a test calibration sample array 1102, shown in Figure 11. Figure 11 shows an array of color swatch patches 1102, along with a grayscale patch row 1104 located at the bottom of the array. The color swatch patches 1102 comprise 256 randomly arranged standard colors, including six grayscale standards. The grayscale row 1104 comprises 16 grayscale colors ranging from black to white. [0088] In order to test the process, the experimental test setup of Figure 11 was illuminated using a D65 light source, and an image was captured using a state-of-the-art digital camera (Canon PowerShot Pro90 IS). The XYZ tristimulus data of the color patches in the test array were known in advance, indexed by the position of each patch in the array. With these data, it was possible to plot the measured R, G and B values for each patch against the known XYZ values for each patch in the test, as shown in Figures 12, 16 and 17. It should be noted that the plots of the data in each of Figures 12, 16 and 17 are identical; what differs is the curve fit that was applied. In particular, in Figure 12 a power fit was used, according to the relationship described above. However, as noted, it is also possible to use a polynomial fit rather than a power fit, and Figure 16 shows a second-order polynomial fit, while Figure 17 shows a fourth-order polynomial fit in which the function is constrained to intercept at zero.
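As a rough illustration, a power relationship of the kind used in the Step 1 fit can be recovered by ordinary least squares in log-log space. This is a minimal sketch, not the patented implementation; the function name and the synthetic data are invented for the example:

```python
import numpy as np

def fit_power(channel, tristimulus):
    """Fit tristimulus = alpha * channel**beta by least squares in log-log
    space. `channel` holds the measured R (or G, B) values of the grayscale
    swatches; `tristimulus` holds the corresponding known X (or Y, Z) values.
    Returns (alpha, beta)."""
    log_c = np.log(np.asarray(channel, dtype=float))
    log_t = np.log(np.asarray(tristimulus, dtype=float))
    beta, log_alpha = np.polyfit(log_c, log_t, 1)  # slope, intercept
    return np.exp(log_alpha), beta

# Synthetic sanity check: data generated from a known power law is recovered.
R = np.array([10.0, 50.0, 100.0, 200.0])
X = 0.02 * R ** 2.2
alpha, beta = fit_power(R, X)
```

A polynomial fit, as in Figures 16 and 17, would simply replace the log-log line with `np.polyfit` of the chosen order applied to the raw channel values.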
As will be described later, whether a power fit or a polynomial fit is used, the results are substantially identical, and there appears to be little, if any, advantage in using a polynomial fit over a power fit. [0089] Once a curve fit has been performed to provide the functions noted above, next, in block 5.16, the multivariate regression of X, Y and Z against the obtained functions is performed, to obtain the coefficients relating X to R, G and B, Y to R, G and B, and Z to R, G and B, as noted in Step 2 above. Figure 13 illustrates a plot of known X against the regression fits of R1 and X1, while Figure 14 shows a plot of known Y against the regression fits of G1 and Y1, and Figure 15 shows a plot of known Z against the regression fits of B1 and Z1. This results in constant coefficients (a, b, c and d in Step 2 above) that characterize each of X, Y and Z as a function of R, G and B, as described above. Once these coefficients have been found, that is, the coefficients from Step 1 and Step 2 of the above algorithm, they are stored, and thereafter characterize the color capture function of the image capture device used by the user. Using these coefficients, it is then possible to find the color of the unknown sample in the image from its RGB values. [0090] Returning to Figure 3, therefore, in block 3.4 the calibration process noted above is performed, and this returns a set of calibration coefficients 36, which can then be used for the subsequent color determination. [0091] First, however, it is necessary to determine whether there is any dominant color in the unknown color sample, and this is performed in block 3.6. For example, the RGB pixel values representing the unknown sample can be examined to determine whether there is a dominant RGB value. Alternatively, if there is no dominant RGB value then, where a web interface is being used, in block 3.10 the user may be asked to choose a color to calibrate. In block 3.12, the chosen color is then calibrated.
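The Step 2 multivariate fit, and the use of its coefficients to map a Step-1-corrected RGB triple to one tristimulus component, can be sketched with ordinary least squares as follows. This is a hedged illustration under the linear model quoted above; the function names and synthetic data are invented for the example:

```python
import numpy as np

def fit_multivariate(R1, G1, B1, target):
    """Least-squares fit of target = a + b*R1 + c*G1 + d*B1 (Step 2 above).
    R1, G1, B1 are the Step 1 outputs for each known swatch; `target` is the
    corresponding column of known X, Y or Z values. Returns (a, b, c, d)."""
    A = np.column_stack([np.ones_like(R1), R1, G1, B1])
    coeffs, *_ = np.linalg.lstsq(A, target, rcond=None)
    return coeffs

def apply_calibration(coeffs, r1, g1, b1):
    """Map one Step-1-corrected RGB triple to a tristimulus component."""
    a, b, c, d = coeffs
    return a + b * r1 + c * g1 + d * b1

# Synthetic sanity check with a known linear relationship.
rng = np.random.default_rng(0)
R1, G1, B1 = rng.uniform(1.0, 100.0, size=(3, 20))
X = 1.0 + 0.9 * R1 + 0.05 * G1 + 0.02 * B1
coeffs = fit_multivariate(R1, G1, B1, X)
```

The same fit would be run three times, once per tristimulus component, giving the stored coefficient sets that characterize the capture device.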
In a later embodiment, a clustering process will be described which can identify multiple colors in the unknown sample and return a calibrated color for each of them. [0092] In block 3.12, the chosen color is calibrated using the calibration coefficients 36. That is, the RGB values are applied to the equations found in block 3.4, using the calibration coefficients 36. This process provides the XYZ tristimulus value of the chosen color. [0093] Once the XYZ values of the unknown color sample have been found (or of the chosen dominant color, if there is more than one color in the sample), the color matching control module 114 then acts to find the closest color in an available color palette, in block 3.14. In this regard, the color palette data 45 are available to the color matching control module 114 and are stored in the color database 110. The closest-color search is performed using a color difference measure: the determined XYZ color is compared with each color in the palette using the difference measure, and the palette color with the smallest difference is chosen. Several distinct difference measures may be used, but in embodiments of the invention it is preferable to use a CIE Delta E measure. In particular, the original CIE Delta E (1976) color difference measure may be used or, in another embodiment, the CIE Delta E (2000) measure may be used. In a further embodiment, Delta E (2000) may be used, but with different weighting factors. [0094] The color matching process in block 3.14 returns a matching paint color, which is the paint color in the palette that is closest to the XYZ color determined for the test sample. This paint color information 42 is then provided back to the user via the network interface 102, over the network 22.
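The closest-color search of block 3.14 can be sketched with the original CIE Delta E (1976) measure, which is simply the Euclidean distance in CIELAB space (the Delta E (2000) variant mentioned above uses the same idea with more elaborate weighting). The palette entries below are hypothetical:

```python
import math

def delta_e_1976(lab1, lab2):
    """CIE Delta E (1976): Euclidean distance between two CIELAB colors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

def closest_paint(sample_lab, palette):
    """Return the palette entry (name, Lab) with the smallest Delta E
    to the sample color. `palette` is a list of (name, (L, a, b)) tuples."""
    return min(palette, key=lambda entry: delta_e_1976(sample_lab, entry[1]))

# Hypothetical paint palette, stored as CIELAB values.
palette = [
    ("Sunset Red", (53.2, 80.1, 67.2)),
    ("Leaf Green", (87.7, -86.2, 83.2)),
    ("Sky Blue",   (32.3, 79.2, -107.9)),
]
name, lab = closest_paint((55.0, 75.0, 60.0), palette)
```

In practice the determined XYZ value would first be converted to CIELAB under a chosen reference white before the comparison.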
For example, where the user transmitted the image to the end server 10 by MMS using a mobile device, the network interface 102 can formulate a short message service (SMS) or MMS message to return the paint color information to the user's mobile device. Alternatively, where the user has sent an e-mail to the end server 10, the network interface 102 may formulate an e-mail in response, containing the paint color information. Where a web interface is used, a web page can be sent to the user for display by the user's web browser, providing the paint color matching information. [0095] Finally, in some embodiments of the invention, in addition to returning the paint color information 42, in block 3.16 the end server 10 also acts to find a color scheme that complements the determined paint color 42. There are several methodologies for determining which colors complement each other. For example, a color that is 120° away from a first color on the CIELAB color wheel is often considered to be a complementary color. Furthermore, a color that is 180° away from a first color on the CIELAB color wheel is also considered to be complementary. Therefore, in block 3.16, such complementary color determination techniques are used to determine color scheme information 44, which is also returned to the user. [0096] Therefore, in the first embodiment, a user can take a digital photograph, using his mobile phone or digital camera, of an object whose color is to be determined. The photograph is taken by placing the calibration color sample card 24 on, beside, or close to the object, in such a way that the calibration color sample card 24 and the object are both captured in the image. The user then sends the image through a telecommunications network from his home to the end server. In this regard, contact details such as an e-mail address, an MMS number or a web address can be provided on the back of the calibration color sample card 24.
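The "120° or 180° around the color wheel" schemes used in block 3.16 amount to rotating the hue angle of a CIELAB color in the a*-b* plane while keeping lightness L* and chroma C* fixed. The following is a minimal sketch of that idea only; the function name and sample color are invented for the example:

```python
import math

def rotate_hue(lab, degrees):
    """Rotate the hue angle of a CIELAB color in the a*-b* plane,
    keeping lightness (L*) and chroma (C*) fixed."""
    L, a, b = lab
    chroma = math.hypot(a, b)
    hue = math.atan2(b, a) + math.radians(degrees)
    return (L, chroma * math.cos(hue), chroma * math.sin(hue))

base = (60.0, 40.0, 0.0)            # a reddish color at hue angle 0 degrees
triadic = rotate_hue(base, 120.0)   # 120 degrees away on the color wheel
opposite = rotate_hue(base, 180.0)  # 180 degrees away on the color wheel
```

The rotated Lab values would then be matched back to the nearest paint in the palette, in the same way as the primary color match.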
The end server 10 receives the image, processes it as described to determine the actual color of the object, and then matches that color against a paint palette to determine a paint color corresponding to the object. Information about the matching paint color is then returned in a response to the user over the telecommunications network. The response can be, for example, via e-mail, SMS or MMS, or by transmitting an appropriate web page for viewing in a browser on the user's computer or mobile phone. With such an arrangement, a user's ability to match paint colors easily is greatly enhanced. In particular, it is no longer necessary for the user to obtain multiple sets of color swatch cards from his local paint store and then try to match the colors using his own perception. Instead, much more accurate and mathematically rigorous color matching can be achieved. In addition, no special equipment is required to capture the image, and the user can utilize image capture equipment that he typically already has. [0097] In order to evaluate the results of the process noted above, the measured RGB data for two test templates (a second test template is shown in Figure 11, described previously; a first test template is the same, but without the grayscale patches 1104 at the bottom) were also used as sample input. These RGB input data were used to calculate calibrated XYZ values using the methods described above. The determined calibrated XYZ colors were then numerically compared with the known XYZ values to provide a measure of the effectiveness of the regression fits in the algorithm. For this purpose, two standard measures of perceptual difference, CIE dE and CIE DE2000, were used. [0098] The table below shows the average values of dE and of DE2000 obtained for each of the methods described above.
[0099] The data in the table above indicate that replacing the power curve fit to the grayscale data with the polynomial fits has little effect on the resulting X1, Y1, Z1 values, with almost no effect on the average DE2000. Therefore, replacing the power curve fit to the grayscale data with the polynomial fits does not result in any significant improvement to the calibration. This may be because any scatter in the grayscale curve fit is taken into account in the multivariate regression process of Step 2. [00100] In terms of results, the dE difference measures are designed in such a way that the minimum difference visible to a human observer has a dE value of 1. In practice, however, for many people a dE of 1 will not result in any visible difference in color, particularly if the colors are not placed side by side. In the present case, the described color determination process, when used on the template with the additional grayscale values included in the iterative regression (Test 2, using the template shown in Figure 11), results in calculated XYZ values that have a mean DE2000 of less than 3 from the actual XYZ values in each test case. [00101] As described so far, the regression analysis to find the calibration coefficients employs as many samples on the card as possible, across the entire color space. However, in the present embodiment, if some prior knowledge of the likely color of the unknown sample can be obtained, the regression analysis to determine the calibration coefficients can then be performed using those known color samples that are close to the color of the unknown sample. This is similar to "zooming in" on the part of the color space that is of interest, that is, the part of the color capture response of the user's image capture device that is actually of most interest, since that part was used to capture the RGB values of the unknown sample.
This smaller part of the color capture response can then be characterized as accurately as possible, in an attempt to improve accuracy. [00102] In more detail, the normal calibration process involves two main steps: 1. Regression analysis of the measured samples and their known colors ('standards') to produce the calibration coefficients that characterize the device used to produce the image. 2. The use of the calibration coefficients to take a known RGB color (and its position relative to the calibration structure) and produce an XYZ color. [00103] In the present embodiment, this process is extended to include a second pass: once the XYZ color from the first pass is known, a subset of the known samples ('standards') on the calibration card is used to repeat Step 1. In the present embodiment, the N standards closest to the calibrated color (from Step 2) are used, and separate sets of closest colors are considered for the gamma correction part of the calibration (e.g., block 5.14 in Figure 5) and for the multivariate analysis part (e.g., block 5.16 in Figure 5). Additional details are shown in Figure 6. [00104] More particularly, in block 6.2, a first pass through the process of Figure 3 is performed, from block 3.4 to block 3.12. That is, the calibration coefficients are found in the manner described in the preceding embodiment, using all the known color samples on the card 24. Next, the XYZ color of the unknown color sample is determined, in block 6.4. [00105] This information is then used, in block 6.6, to identify the N samples closest to the identified XYZ color of the unknown sample. In this embodiment, the NG closest grayscale samples and the NC closest color samples are found, where NG is typically smaller than NC. Details of the tests performed to determine values for NG and NC will be given later. The closest grayscale samples and color samples are found using a delta_E difference measure, such as delta_E(2000).
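The subset selection of block 6.6 can be sketched as follows. For brevity the sketch uses the Euclidean Delta E (1976) distance as a stand-in for the Delta E (2000) measure named above, and the list of standards is hypothetical:

```python
import math

def n_closest_standards(target_lab, standards, n):
    """Pick the n standards closest to the first-pass color.
    `standards` is a list of (patch_id, (L, a, b)) tuples; Delta E (1976)
    is used here as a simple stand-in for Delta E (2000)."""
    def de(lab):
        return math.sqrt(sum((t - s) ** 2 for t, s in zip(target_lab, lab)))
    return sorted(standards, key=lambda s: de(s[1]))[:n]

# Hypothetical card standards: (patch_id, (L, a, b)).
standards = [("p1", (50, 10, 10)), ("p2", (50, 12, 9)),
             ("p3", (90, -60, 60)), ("p4", (20, 5, -40))]
subset = n_closest_standards((50, 11, 10), standards, n=2)
```

The returned subset (here the two reddish patches nearest the first-pass color) would then feed the repeated Step 1 and Step 2 fits of the second, zonal pass.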
[00106] After finding the closest colors (among the grayscale samples and the color samples), in block 6.8 the calibration is performed again, to redetermine the calibration coefficients, but this time using only the closest colors found. As noted, this is similar to zooming in or focusing on a particular zone of the color space. In principle, any local effects present in the calibration process should thereby be taken into account. [00107] After redetermining the calibration coefficients, in block 6.10, the XYZ values of the unknown sample are recalculated using the new calibration coefficients and the measured RGB values from the image. [00108] A series of tests was performed to evaluate the effects of this recalibration, and these tests are detailed below, with reference to Figures 18 to 21.
TEST 1
[00109] As an initial evaluation of this zonal calibration method, the measured RGB values for the two test templates discussed previously (the second test template is shown in Figure 11; the first template is identical but without the grayscale color row at the bottom) were used as sample RGB values. A range of subset sizes (i.e., values for NG and NC) was tested in the second (zonal) pass, as follows. The dE and DE2000 values reported are for the determined X1, Y1 and Z1 values. [00110] It is clear from the table above that, in all cases, the second zonal pass improves the mean values of dE and DE2000 (there is less scatter). Figure 18 summarizes the data, with a decrease in the number of template colors (NC) used in the second pass resulting in significant improvements in DE2000. Reducing the number of template grayscale colors (NG) used in the second pass also improves the DE2000, although the effect is not as significant as that obtained by reducing the colors.
TEST 2
[00111] A similar analysis was performed on the data from the second template (shown in Figure 11).
As with the first template, the second pass results in a significant improvement in the average dE and DE2000 (see table below). The results are shown graphically in Figure 19. [00112] Figure 19 demonstrates that reducing the number of colors (NC) used in the calibration data subset for the second pass significantly increases the fidelity of the determined XYZ values (i.e., lowers the DE2000). However, reducing the number of grayscale samples (NG) used in the power curve fitting step of the second pass has little effect on color fidelity.
TEST 3 AND TEST 4
[00113] Test 3 and Test 4 use the standards of Template 2, but additionally have "real" sample data in the image with which to evaluate the zonal calibration method.
TEST 3
[00114] Test 3 is a "best case" scenario using a state-of-the-art digital camera (DigiEye) under controlled lighting conditions (D65). The results for the ten test samples are shown in the following table. [00115] Again, the second zonal pass reduces the average values of dE and DE2000, providing an improvement over the single-pass arrangement. The effect on DE2000 is shown in Figure 20. Here, reducing NG and NC decreased the average delta_E values obtained.
TEST 4
[00116] Test 4 is a "realistic case" scenario using a mass-produced digital camera (Canon PowerShot S30) in good natural daylight. The results for the ten test samples are shown in the following table. [00117] The effect on DE2000 is shown in Figure 21. In this test, however, there is a minimum in the DE2000 values at approximately 50 standards. Reducing the number of grayscale standards used in the second pass has little effect on DE2000. [00118] These tests show that reducing the number of colors NC used in the multivariate regression has an appreciable effect on the color accuracy obtained for the unknown sample.
In particular, provided that some prior knowledge of the color of the unknown sample can be obtained, restricting NC to the closest colors for the multivariate regression, where NC is in the range 5 to 250, more preferably 10 to 100, even more preferably 20 to 100, more preferably still 30 to 70, and most preferably 40 to 60, can increase color determination accuracy. Figure 21 shows that the most accurate color determination was obtained when approximately the 50 closest colors were used for the multivariate analysis, although good results, with a DE2000 of less than 3.0, were obtained where a number of colors in the range from approximately 20 to approximately 100 was used. In percentage terms, this equals approximately 8% to approximately 40% of the colors that may be available on the color card 24, assuming, for example, that there are approximately 250 colors on the card. [00119] As to how a priori knowledge of the color of the sample can be obtained, as noted above, in the present embodiment this is achieved by performing the first-pass processing to determine the color and then performing a second pass with the reduced number of colors in the calibration step. However, this is not essential and, in other embodiments, prior knowledge could be obtained in some other way. For example, in one embodiment, an assumption can be made about the nature of the characteristics of the imaging device (e.g., the RGB colors can be assumed to be in the sRGB color space). In another embodiment, the reduced number of colors can be obtained by choosing samples whose measured RGB values are close to the RGB color to be measured. In a further embodiment, the colors on the color card can span a reduced range. For example, different versions of the color card can be produced, each containing a subset of the color space, i.e., one card that has "reds" and another card that has "blues".
The user then selects the card that has the colors closest to the color he wants to match; for example, the user knows he wants to match a red cushion and therefore uses a card 24 that has predominantly reds on it. In all these cases, a reduced set of color samples that are known to be close to the color to be determined is used to perform the calibration, and thus local changes in the device's color capture response in that part of the color space can be taken into consideration.
2. SECOND EMBODIMENT - IMAGE ORIENTATION
[00120] A second embodiment of the invention will now be described. The second embodiment of the invention takes as its basis the first embodiment described above and, therefore, the features common to both will not be described again. [00121] The second embodiment relates to the image orientation performed in block 3.2 of the process of Figure 3. More particularly, as described in the first embodiment, such image orientation may not be necessary, since the user may have produced the card image by manually cropping and rotating the image of the calibration color sample card 24 and the unknown sample before sending it to the end server. In this regard, when the user takes the image, he can ensure that the orientation of the card relative to the image plane is correct, without any perspective distortion or skew. [00122] However, for lay users, it is preferable that no pre-processing need be performed on the image by the user, and that no special conditions have to be satisfied regarding the image orientation when the image is generated. Instead, the system should be as easy as possible for lay users to use, requiring them only to take a picture of the calibration color sample card 24 with the unknown color sample, with the calibration color sample card 24 in any orientation. In this way, the system will be easy for lay users to understand and use and will thereby promote use of the system.
[00123] In the second embodiment, therefore, in order to allow easy use, the image 32 received at the end server may contain the calibration color sample card 24 in any orientation. However, in order to process the data in the image, the orientation of the calibration color sample card 24, and the positions of the color sample patches on the card in the image, must be known. Therefore, in block 3.2, the location and orientation of the card image are determined by the image orientation module 116. [00124] Figure 4 shows the operation of the image orientation module 116 in more detail. First, in block 4.2, the image data 32 are received from the network interface 102 (or the color matching control module 114). In order to locate the calibration color sample card 24 within the image, in block 4.4, edge detection is performed on the image to detect high-contrast edges. In this regard, the calibration color sample card 24 has a thick double border 248 which can be used to locate the card in the image 32, this border being readily identifiable by edge detection algorithms. Once such contours in the image have been located, in block 4.6 the contours are searched for a series of nested four-sided convex contours which have the correct sequence of orientations and in which each child is a significant fraction of the size of its parent. In this regard, the thick border appears after edge detection as two nested four-sided shapes, and the identification of such nested shapes in the image thus identifies the card 24. [00125] After determining the position of the card 24 in the image as above, the image can be segmented to leave the card image data 46, as shown. It is then necessary to identify known features on the card in order to perform a perspective transformation to deskew the image. Therefore, in block 4.8, known card features are identified, such as the card edges.
Note that any fiducial marker can be used to identify fixed points on the calibration card; in the present embodiment, only four points on the card need to be identified in order to perform the perspective transformation. [00126] After identifying the known points in the card image, in block 4.10 these points (for example, the corners of the innermost border) are used to perform a perspective transformation to deskew the image. The deskewed card image 50 is shown as an example in Figure 4. However, this deskewed card image 50 could be in any rotational orientation, so prior knowledge of the expected layout of the card is used to orient it correctly. In this regard, the color card data 112 stored in the color database 110 includes information on the location of a fiducial feature that can be recognized and used to orient the card. For example, the barcode or trademark along one edge of the card has white areas next to it; the two lightest corners can therefore be found and the image rotated to bring them to the bottom. Thus, in block 4.12, a known feature related to the rotational orientation of the card is recognized, and the deskewed card image 50 is then rotated in block 4.14 such that the feature is placed in the known rotational orientation, thereby rotationally orienting the card. In this way, card image data 34 of known orientation is obtained. [00127] In other embodiments, any known characteristic of the card can be used to obtain the rotational orientation. This can also be achieved by making one of the fiducial features different from the others. Another possibility is to make the arrangement of the samples on the card rotationally symmetrical, so that the rotational orientation of the card is unimportant.
[00128] The overall result of the above steps is that the user does not need to intervene to find the card in the image, and no special requirements are imposed on the user as to how the image should be taken or pre-processed before being sent to the end server. In this way, a much more user-friendly system is obtained, which is likely to see greater use by lay users.

3. THIRD EMBODIMENT - SPATIAL BRIGHTNESS CORRECTION

[00129] A third embodiment of the invention will now be described. The third embodiment takes as its basis the first or second embodiment described previously, and the features common to them will not be described again. [00130] The third embodiment is focused on improving the determination of the calibration coefficients performed in block 3.4 of the process of Figure 3 and, in particular, takes into account differences in brightness and contrast across the image of the card 34. That is, the user may have generated the image 32 under imperfect lighting conditions, such that there are differences in illumination across the card 24 and the brightness and contrast across the card are not uniform. The present embodiment therefore presents additional processing that can be performed at the calibration stage to extend the calibration model to take such spatial differences in illumination into account. The embodiment presents a method that assumes a linear change in brightness and contrast across the card, although it is possible to find higher-order coefficients that model higher-order changes. [00131] Figure 7 illustrates the process in more detail. The process comprises two main steps (B.7.6 and B.7.10). First, in block 7.2, the samples Ri, Gi and Bi at (xi, yi) in the image are measured, and the corresponding XYZ values Xi, Yi and Zi are obtained from the color card data in the color database.
Then the respective relationships are found which map the known X to the measured R, taking into account the position (x, y) of each measured R in the image of the card 34. The same is done to map the known Y to the measured G, and the known Z to the measured B. That is, considering X and R in more detail, a relationship is formulated which relates X to R using a power fit, but where the coefficient of R depends on the position in the card image. In addition, an offset term is also introduced into the equation, which is likewise position dependent. That is, the relationship to be found between X and R depends on the position of the samples on the card. Similar position-dependent relationships are also found between Y and G, and between Z and B. In the present embodiment, equations of this form are used (the equation images are not reproduced in this text): [00132] where αX,Y,Z, βX,Y,Z, ζX,Y,Z, nX,Y,Z, γX,Y,Z, δX,Y,Z and εX,Y,Z are fixed coefficients, (xi, yi) is the position of the i-th sample on the card, and Ri, Gi and Bi are the measured RGB values of the i-th sample. However, in other embodiments, different equations can be used - any relationship that takes into account the position of the samples on the card can be used. [00133] The above equations are solved using the least squares fit method in B.7.6 to determine the values of αX,Y,Z, βX,Y,Z, ζX,Y,Z, nX,Y,Z, γX,Y,Z, δX,Y,Z and εX,Y,Z. However, without any prior knowledge, these equations may not be easily solved (local maxima or minima can be found). Therefore, optionally (in block 7.4), the coefficients αX,Y,Z and βX,Y,Z can be found in advance using the grayscale samples in the image, without any position dependence, by performing a least squares fit of a power curve of X against R.
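The patent's equation images do not survive in this text. A plausible reconstruction, consistent with the description above (a power fit whose gain and offset both vary linearly with position, giving seven coefficients per channel), is Xi = (αX + δX·xi + εX·yi)·Ri^nX + βX + γX·xi + ζX·yi, with analogous equations for Y against G and Z against B; this form is an assumption, not the verbatim equation. The optional block 7.4 initialisation, a position-independent power fit on the grayscale samples, can be sketched as follows (the exact fit form is likewise an assumption):

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_power_curve(r, x):
    """Least-squares fit of x = alpha * r**n + beta, used as a
    position-independent initialisation on the grayscale samples."""
    popt, _ = curve_fit(lambda r, alpha, n, beta: alpha * np.power(r, n) + beta,
                        r, x, p0=[1.0, 1.0, 0.0], maxfev=10000)
    return popt  # alpha, n, beta

# Synthetic grayscale samples that follow a known power law exactly.
r = np.linspace(0.1, 1.0, 20)
x = 0.8 * r ** 2.2 + 0.05
alpha, n, beta = fit_power_curve(r, x)
```

On real samples, the recovered alpha and beta values would seed the position-dependent least squares solve of block B.7.6, reducing the risk of landing in a local minimum.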
The same fit is then performed of Y against G and of Z against B, providing the six coefficients αX,Y,Z and βX,Y,Z. [00134] Note that these equations do not take into account any spatial brightness distribution, but are used to provide initial values of αX,Y,Z and βX,Y,Z which can then be used to solve the position-dependent equations. [00135] Then, in block 7.8, these 21 coefficients (7 per channel: αX,Y,Z, βX,Y,Z, ζX,Y,Z, nX,Y,Z, γX,Y,Z, δX,Y,Z and εX,Y,Z) are used to calculate values for all known samples in the image - not just the grayscale samples. These are then used for a multivariate fit in block 7.10 - essentially running a least squares fit of these calculated values against the measured values (Xi, Yi, Zi), using an equation whose image is not reproduced in this text. [00136] The multivariate fit then provides 12 additional coefficients (aX,Y,Z, bX,Y,Z, cX,Y,Z, dX,Y,Z). The set of 21 coefficients αX,Y,Z, βX,Y,Z, ζX,Y,Z, nX,Y,Z, γX,Y,Z, δX,Y,Z and εX,Y,Z and the set of 12 coefficients aX,Y,Z, bX,Y,Z, cX,Y,Z, dX,Y,Z are then stored as the calibration data 36. These 21 + 12 coefficients can then be used subsequently (in B.3.12 in Figure 3) to calculate the XYZ color value of interest using the equations above. [00137] Thus, in the third embodiment, the calibration process is adapted to account for variations in brightness and contrast across the card 24 in the image. This makes the system even easier to use, and imposes few limitations on the lighting of the imaged scene while still allowing good results to be obtained.

4. FOURTH EMBODIMENT - CLUSTERING TO FIND MULTIPLE COLORS IN THE SAMPLE

[00138] A fourth embodiment of the invention will now be described. The fourth embodiment takes as its basis any one of the first, second or third embodiments already described, and the elements common to them will not be discussed again.
[00139] The fourth embodiment of the invention presents a technique that can be used, for example, in block 3.6 of the process of Figure 3, where there is more than one color in the unknown color sample. For example, the user may have placed the card 24 over an item that is patterned and, while there is a dominant color in the pattern, there are also a number of secondary colors. In such a case, a determination must be made as to which color should be matched. In the first embodiment, the option of identifying a single dominant color was presented, either through the user's choice of a color or through determination of a dominant color using statistical measurements on the pixels that represent the sample. In the fourth embodiment, however, a clustering algorithm is used to attempt to identify each of the several colors in the unknown color sample, so that individual XYZ determination and matching can then be performed for each individual color. [00140] Within the fourth embodiment, a k-means clustering algorithm is used to determine the main colors present in a sample image. K-means clustering is based on Euclidean distances between pixel values. In RGB space, equal distances do not correspond to equal perceived color differences: two pixels that are very close together in RGB space may appear to be very different colors, or very similar colors. To overcome this, the pixels are converted to L*a*b* space, which is more perceptually uniform, so that the perceived difference between pixels is relatively consistent across the color space. This process is performed on the image once it has been deskewed, and preferably once the variation in illumination across the card has been eliminated (i.e., it operates on the calibrated colors from the image). [00141] An iterative process is used to determine how many clusters are present in the portion of the image that represents the unknown sample, and what the average color of each cluster is.
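The conversion to L*a*b* described above can be sketched in numpy using the standard sRGB (D65) formulas; the constants below come from the sRGB and CIELAB definitions, not from the patent:

```python
import numpy as np

def srgb_to_lab(rgb):
    """Convert sRGB values in [0, 1] (shape (..., 3)) to CIE L*a*b*
    under the D65 white point."""
    rgb = np.asarray(rgb, dtype=float)
    # Undo the sRGB gamma to get linear RGB.
    lin = np.where(rgb <= 0.04045, rgb / 12.92,
                   ((rgb + 0.055) / 1.055) ** 2.4)
    # Linear RGB to XYZ (sRGB matrix, D65 white).
    M = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = lin @ M.T
    # Normalise by the D65 white point and apply the CIELAB function.
    xyz = xyz / np.array([0.95047, 1.0, 1.08883])
    eps = (6 / 29) ** 3
    f = np.where(xyz > eps, np.cbrt(xyz),
                 xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

# Pure white maps to approximately L* = 100, a* = 0, b* = 0.
white = srgb_to_lab([1.0, 1.0, 1.0])
```

Whole pixel arrays of shape (H, W, 3) convert in a single call, after which Euclidean distances approximate perceptual color differences (delta-E), as the clustering below requires.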
The first iteration is the simplest, because it is assumed that there is only one cluster of pixels in the sample. This means that the k-means algorithm must return a single cluster that contains all pixels. The average L*a*b* value of the pixels in the image is taken, and the number of pixels within a certain distance of this average is counted. If the number of pixels found is above a threshold, then it is assumed that there is only one color in the image; if the number of pixels is below the threshold, the k-means algorithm is run on the image, attempting to group the pixels into two clusters. The average L*a*b* value of each cluster is calculated, and the number of pixels within a given distance of this value is counted. Two checks are performed to decide whether this is significant. The first checks whether most of the pixels in a cluster are within the given distance of its average (i.e., that the average is a good representation of that cluster); the cluster is ignored if there are not enough pixels within the given distance. The second check is that the number of pixels within the given distance of the averages of all valid clusters must be higher than a threshold (i.e., checking that enough pixels have been accounted for to be confident that the dominant colors have been identified). If the number of counted pixels is lower than this threshold, the k-means algorithm is rerun, attempting to group the pixels into three clusters instead of two, and the analysis is repeated. [00142] The following algorithm is used to find clusters, and is shown in more detail in Figure 8. The algorithm has several adjustable parameters:
- Maximum delta-E radius (dE_thresh)
- Required fraction of the image (F_img)
- Minimum fraction in the cluster (F_cluster)
- Maximum number of clusters to try (N_max)
[00143] These are set for a particular implementation in block 8.2. Experimentation will indicate appropriate values for the adjustable parameters.
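A runnable sketch of this iterative clustering follows, assuming a plain Lloyd's k-means with a deterministic initialisation; the parameter defaults keep the patent's names (dE_thresh, F_img, F_cluster, N_max) but their values here are illustrative assumptions:

```python
import numpy as np

def kmeans(points, k, iters=50):
    """Plain Lloyd's k-means with a deterministic, spread-out
    initialisation along the first (L*) axis."""
    order = np.argsort(points[:, 0])
    picks = np.linspace(0, len(points) - 1, k).astype(int)
    centers = points[order[picks]].astype(float).copy()
    labels = np.zeros(len(points), dtype=int)
    for _ in range(iters):
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return centers, labels

def dominant_colors(lab_pixels, dE_thresh=10.0, F_img=0.9,
                    F_cluster=0.5, N_max=8):
    """Grow the number of clusters until enough pixels lie within
    dE_thresh of a cluster mean, then drop clusters that are poorly
    represented by their mean."""
    n = len(lab_pixels)
    for k in range(1, N_max + 1):                # try 1, 2, ... clusters
        centers, labels = kmeans(lab_pixels, k)
        p_thresh = [(np.linalg.norm(lab_pixels[labels == j] - centers[j],
                                    axis=1) < dE_thresh).sum()
                    for j in range(k)]           # pixels near each mean
        if sum(p_thresh) / n >= F_img:           # enough of the image covered
            break
    return [centers[j] for j in range(len(centers))  # filter weak clusters
            if (labels == j).sum()
            and p_thresh[j] / (labels == j).sum() > F_cluster]

# Demo: two well-separated synthetic L*a*b* clusters.
rng = np.random.default_rng(1)
pix = np.vstack([rng.normal((10, 0, 0), 1.0, (200, 3)),
                 rng.normal((60, 20, -20), 1.0, (200, 3))])
colors = dominant_colors(pix)
```

On the two-cluster demo, a single cluster fails the F_img check (the global mean represents almost no pixels), while two clusters pass, so the two cluster means are returned.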
[00144] The algorithm is as follows:
1. Start with one cluster (i.e., all pixels in the sample) (block 8.4).
2. If the number of clusters is greater than N_max, go to Step 5 (block 8.6).
3. Calculate the following statistics for each cluster (block 8.8):
   a. Average pixel value (L*a*b*) (block 8.10);
   b. Number of pixels within dE_thresh of the average pixel value (P_thresh) (block 8.12).
4. If Sum(P_thresh)/(number of pixels in the image) is less than F_img (block 8.14), increase the number of clusters by 1 and go to Step 2 (block 8.16).
5. Filter the clusters to include only those that have P_thresh/(number of pixels in the cluster) > F_cluster (block 8.20).
[00145] Although reference is made above to color values in L*a*b* space, the algorithm can also be executed using XYZ values, since the two sets of color data are mathematically related. [00146] Figures 9 and 10 illustrate the algorithm's operation graphically. In Figure 9(a), a cluster 92 is identified, but the cluster fails the density threshold test, since a very high percentage of pixels lies outside the dE_thresh distance from the cluster mean. In Figure 9(b), an attempt is made to divide the distribution into two clusters, but cluster 94 is invalid, since insufficient pixels are found within the cluster radius. Furthermore, the sample as a whole fails the threshold for the entire sample image, since many pixels are not in the valid clusters. Therefore, the number of clusters is increased to 3 and clustering is performed again. [00147] Figure 10 illustrates the same distribution as Figure 9(b), but with three clusters. In part (a) of Figure 10, the number of pixels within the fixed distance of the means is not high enough to pass with two clusters in the k-means algorithm, so the analysis is performed again using three clusters. The number of pixels within the fixed distance is then sufficiently high, and the three colors found in the image are the averages of the three clusters of pixels.
In this case, clusters 1010, 1020 and 1030 can be identified, each of which passes the applied threshold tests. [00148] Various modifications may be made to the above-described embodiments to provide additional embodiments. For example, the second to fourth embodiments are each described as being based on the first embodiment. In the first embodiment, the image is transmitted over a telecommunications network to an end server for processing. In variants of the first to fourth embodiments, however, this need not be the case. Instead, a program may be made available for download to a user's computer or mobile phone which can perform the processing operations described. In this way, the user's computer or phone can calculate the color of the unknown sample from the captured image and optionally suggest matching paint colors, without any image data having to be sent over a network. [00149] Furthermore, in the above-described embodiments, the image that is taken contains both the card 24 and the unknown sample. However, this is not essential. In other embodiments, two separate images can be provided, spaced in time. A first image may be of the card 24, and this is used to find the calibration coefficients for the user's imaging device. A separate image can then contain the unknown sample, and the calibration coefficients found from the first image are then applied to the RGB values of the unknown sample in the second image. However, this arrangement is less preferable than the arrangement described above, since for accuracy the lighting conditions of the first and second images need to be kept substantially identical. This obstacle is removed if a single image containing the calibration card 24 and the sample is taken. [00150] Various additional modifications, such as additions, deletions or substitutions, will be apparent to the intended reader, who is skilled in the art, to provide additional embodiments, which are to be found within the appended claims.
Claims (12) [0001] 1. METHOD, characterized by comprising: receiving first image data relating to an unknown color sample, for which colorimetry data are to be determined; receiving second image data relating to a plurality of known calibration color samples, for which colorimetry data are already known; determining a plurality of color calibration characteristics that relate color measurements of the known calibration color samples from the second image data to the corresponding known colorimetry data of the calibration color samples; and calculating the colorimetry data of the unknown color sample in dependence on its color measurements from the first image data and the determined color calibration characteristics; wherein the color calibration characteristics are determined using N known calibration color samples, where N is less than the total number of known calibration color samples across the entire color space, wherein the N known calibration color samples are those N samples that are the closest in color space to an estimated color of the unknown color sample, wherein the estimated color is obtained by determining a first set of calibration characteristics using all available known calibration color samples, and by calculating the estimated color using the first set of calibration characteristics. [0002] 2. METHOD according to claim 1, characterized in that the estimated color is obtained using the above steps during a first processing pass, and a second processing pass using the above steps is then performed, using the N calibration color samples from the set of known calibration color samples that are closest to the estimated color. [0003] 3. METHOD according to claim 1, characterized in that the N known calibration color samples are those N samples that lie within a confined color space which is known to be represented by the second image data. [0004] 4.
METHOD according to claim 1, characterized in that the N known calibration color samples are those N samples that have color values measured from the second image data that are most similar to the measured color value of the unknown sample from the first image data. [0005] 5. METHOD according to claim 4, characterized in that the measured color values are RGB or sRGB values. [0006] 6. METHOD according to any one of claims 1 to 3, characterized in that N is in a range from 5 to 250. [0007] 7. METHOD according to claim 6, characterized in that N is in a range from 20 to 85. [0008] 8. APPARATUS, comprising: at least one processor; and at least one memory including instructions, wherein the at least one memory and the instructions are configured to, with the at least one processor, cause the apparatus to do at least the following: i) receive first image data relating to an unknown color sample, for which colorimetry data are to be determined, and second image data relating to a plurality of known calibration color samples, for which colorimetry data are already known; ii) determine a plurality of color calibration characteristics that relate color measurements of the known calibration color samples from the second image data to the corresponding known colorimetry data of the calibration color samples; and iii) calculate the colorimetry data of the unknown color sample in dependence on its color measurements from the first image data and the determined color calibration characteristics; characterized in that the color calibration characteristics are determined using N known calibration color samples, where N is less than the total number of known calibration color samples across the entire color space, wherein the N known calibration color samples are those N samples that are the closest in color space to an estimated color of the unknown color sample, wherein the estimated color is obtained by
determining a first set of calibration characteristics using all available known calibration color samples, and by calculating the estimated color using the first set of calibration characteristics. [0009] 9. APPARATUS according to claim 8, characterized in that the estimated color is obtained using the above steps during a first processing pass, and a second processing pass using the above steps is then performed, using the N calibration color samples from the set of known calibration color samples that are closest to the estimated color. [0010] 10. APPARATUS according to claim 8, characterized in that the N known calibration color samples are those N samples that lie within a confined color space which is known to be represented by the second image data. [0011] 11. APPARATUS according to claim 8, characterized in that the N known calibration color samples are those N samples that have color values measured from the second image data that are most similar to the measured color value of the unknown sample from the first image data. [0012] 12. APPARATUS according to claim 11, characterized in that the measured color values are RGB or sRGB values.
Legal status:
2019-01-08| B06F| Objections, documents and/or translations needed after an examination request [chapter 6.6 patent gazette]|
2020-03-31| B06U| Preliminary requirement: requests with searches performed by other patent offices: procedure suspended [chapter 6.21 patent gazette]|
2020-04-22| B15K| Others concerning applications: alteration of classification|Free format text: THE PREVIOUS CLASSIFICATIONS WERE: H04N 1/60, G01J 3/52; IPC: H04N 1/60 (2006.01), G01J 3/02 (2006.01), G01J 3/4|
2021-09-08| B06A| Patent application procedure suspended [chapter 6.1 patent gazette]|
2021-12-21| B09A| Decision: intention to grant [chapter 9.1 patent gazette]|
2022-02-15| B16A| Patent or certificate of addition of invention granted [chapter 16.1 patent gazette]|Free format text: TERM OF VALIDITY: 20 (TWENTY) YEARS COUNTED FROM 17/01/2011, SUBJECT TO THE LEGAL CONDITIONS. PATENT GRANTED IN ACCORDANCE WITH ADI 5.529/DF, WHICH DETERMINES THE ALTERATION OF THE GRANT TERM.|
Priority:
Application number | Filing date | Patent title
GBGB1000835.7A|GB201000835D0|2010-01-19|2010-01-19|Method and system for determining colour from an image|
GB1000835.7|2010-01-19|
PCT/EP2011/050534|WO2011089095A1|2010-01-19|2011-01-17|Method and system for determining colour from an image|
Related patents
Sulfonates, polymers, resist compositions and patterning process
Washing machine
Washing machine
Device for fixture finishing and tension adjusting of membrane
Structure for Equipping Band in a Plane Cathode Ray Tube
Process for preparation of 7 alpha-carboxyl 9, 11-epoxy steroids and intermediates useful therein an